Alan Mathison Turing, in his paper "Computing Machinery and Intelligence",1 proposes the question 'Can machines think?'. In considering this question, Turing concludes that it lacks a concrete methodological goal and decides to replace it with the 'Imitation Game'. The Turing test, as Turing's 'Imitation Game' is generally called, is a procedure for testing whether a machine is capable of humanlike thought. Simply described, an interrogator sits at a teletype machine, isolated from two correspondents: one is another person, the other a computer. By asking questions through the teletype machine and studying the responses, the interrogator tries to determine which correspondent, A or B, is the human and which the computer. If that proves impossible, the computer is credited with having passed the test.2 Turing's purpose in designing the test was to standardize a method for determining whether machines can act in a human way.
Instead of asking a rhetorical question that can be answered in a myriad of ways, Turing focused on an empirical criterion for judging the "humanness" of a machine. This new criterion, the Turing test, compares the capabilities of humans and machines. If a machine is indistinguishable from a human in all possible ways, Turing suggested, we must accord thought to that machine. As Turing put it, "If a computer could not be distinguished from a human, then fair play would oblige one to say that it is thinking."3 To this day, the Turing test is heralded as a criterion that must be surpassed before a machine can claim the attribute of human thought.
Keith Gunderson, in his article "The Imitation Game",4 considers the soundness and implications of Turing's 'Imitation Game'. Gunderson takes Turing's statements to mean that a machine that can play the 'Imitation Game' can be ascribed thought.5 I do not believe this is a statement Turing would agree with completely. Turing thought that the question "Can machines think?" was too imprecise, and his goal was to replace it with a different, closely related question. The 'Imitation Game' was a first step in deciding whether machines could exhibit humanlike behavior, not necessarily thought. For Gunderson, the Turing test becomes a method of determining thought, which goes beyond the scope Turing envisioned for his test.6 I do not believe Turing would grant a machine thought if it merely passed his test. He believed that once a computer is indistinguishable from a human it must be regarded as 'thinking', yet his test is not a sufficient criterion for this claim, merely a necessary one. I will put aside this objection to Gunderson's reformulation of Turing's statements and discuss the reservations he has about the Turing test.
To illustrate what he sees as the absurdity of the Turing test, Gunderson reformulates Turing's original question from "Can machines think?" to "Can rocks imitate?" He then follows the same steps as Turing did in configuring the 'Imitation Game', except that instead of computers there are rocks, and instead of asking questions the interrogator's foot is stepped on. The interrogator must then decide which entity, A or B, is a rock and which is a human. This new 'Rock test' is similar in structure to the Turing test, but it concludes with an absurd statement: "Rocks can think." Gunderson suggests that his facetious parody of the Turing test illustrates two points. First, we cannot conclude that rocks can imitate, but merely "that they are able to be rigged in such a way that they could be substituted for a human being in a toe-stepping game".7 Second is the idea that thinking cannot be measured by net results, such as those exhibited in the Turing test or the 'Rock test'. On this view the Turing test is fundamentally incomplete in its design, since its principles, when applied to rocks, yield the same results as they do for humans.
Gunderson continues with the example of a door attendant and an electric door. Both help people exit a building by opening the door, yet we do not ascribe thought to both. The net results are the same: I am assisted in exiting a building. However, I have a drastically different impression of the door attendant's capabilities than of the electric door's. Similarly, participants in the Turing test may produce the same net results, namely that both play the game comparably well, yet we do not attribute thought to every participant able to produce those results. Gunderson suggests that the Turing test fails to account for this difference and is therefore incomplete as a judge of whether the object being tested exhibits thought.
At first glance, I found the analogy Gunderson employs to illustrate his point effective. Nonetheless, upon more scrupulous analysis, substantial defects become apparent in this seemingly parallel example. Gunderson takes the liberty of asking "Can rocks imitate?", but when designing his test he uses the concept of a 'rock-box'. So although the question deals with rocks, the game uses a more ambiguous 'rock-box' which, judging by the functions it performs, consists of a mechanical device with electric-eye sensors and some sort of self-propelling automaton. Even the label 'rock-box' is misleading, because the mechanism Gunderson describes has to be significantly more complex than a mere box full of rocks; it has to contain a combination of fine-tuned mechanisms and a series of instructions that even today would prove hard to devise. No similar inconsistency is apparent in Turing's 'Imitation Game'. The question Gunderson should be asking is "Can a machine (rock-box) imitate?" If this were the question, then I would be obliged to say yes: the 'rock-box', if designed in a sophisticated manner, could replicate the motions of a human foot.
Gunderson also interchanges, even equates, the words 'imitation' and 'thought'. There is a subtle difference between what Gunderson is asking in his question and what Turing is asking in his: Gunderson's question revolves around 'imitation', while Turing's revolves around 'thought'. The 'Imitation Game', by virtue of its name, can be misleading. Turing's final criterion in deciding whether a machine can exhibit humanlike behavior is whether it is indistinguishable from a human, not merely whether it imitates one. Gunderson's question deals with the imitation of a human foot, a deceptively weaker claim than Turing's. Gunderson takes imitation and thought to be the same thing in the Turing test. This is not so for Turing, who does not claim that imitation is a suitable replacement for thought.8 Although Gunderson's example is partly designed to exaggerate the absurdity of Turing's arguments, the basic tenets of what Gunderson's and Turing's tests are supposed to be testing are drastically different. The replication of a human foot motion is almost peripheral to human thought, while the replication of human conversation and interaction seems much more central to it. The thought and level of complexity involved in using a foot to stomp on someone is minuscule compared to what it takes to interact with a human in an intelligible manner. Thus, whether a machine can stomp in the proper way is irrelevant, or hardly relevant, to human thought, while the skills needed to pass the Turing test are much more relevant.
Gunderson continues his criticism of the Turing test by employing Scriven's discussion of "the performatory problem". Gunderson draws on a certain portion of Scriven's account in making his point: a distinction between "achieving a similar result" and "doing the same thing".9 For Gunderson, and for Scriven, these two are not the same. The Turing test is designed to judge the similarity of a certain result, namely humanlike behavior. This does not automatically imply that if a machine and a human achieve the same level of results, they are engaging in the same process to achieve them. Humans, in order to converse intelligently, have to think. Computers, if they are able to converse intelligently, might also be thinking, but there is no way to verify this with the Turing test. Gunderson adapts Scriven's "performatory problem" to attack the necessity of imputing thought to a machine. People and machines are clearly different in their composition, yet it is possible for them to achieve the same results in certain functions. It is not clear that passing the Turing test indicates that a machine is engaging in the same processes that a human is. While imitating results, a machine may not be imitating method during the course of the Turing test. Unfortunately, we do not know what method is quintessential to human thought. Because it cannot take 'doing the same thing' into account, the Turing test is inadequate to guarantee that a machine is exhibiting thought.
Gunderson continues his examination of the 'Imitation Game' by elucidating another shortcoming of the test. He aptly points out that human beings are judged in a context greater than the Turing test alone. Gunderson claims that "we would only regard it [the Turing test] as one of the many examples we might give of Peterson's mental capacities."10 The suggestion is that the Turing test is not representative of all that is required for thought. We see human beings in a greater context than their participation in a certain test. It is difficult, if not impossible, to set aside all the other examples of human thought when judging a human's performance in the Turing test. Machines, on the other hand, do not have the privilege that humans have when taking the test, namely being associated with numerous other activities and examples that are deemed to be a result of thought. For Gunderson, the Turing test by itself creates an incomplete picture of human behavior. The 'Imitation Game' has to be included within a greater context in order for us to associate thought with a machine that has passed the test: "Thinking, whatever positive characterization or account is correct, is not something which any one example will explain or decide." It is Gunderson's argument that Turing's 'Imitation Game' is just one example of thought, and, being one example, the test is an inadequate procedure for testing the presence of thought.
Given the layout of the Turing test, I believe that Gunderson's claim is premature. The interaction between the interrogator and the correspondents is limited only by their interface, a teletype machine. Turing's goal was not to prescribe particular questions or techniques within the framework of the game. Essentially, the interrogator may demand anything of the correspondents except that they perform some physical feat or demonstration. Intellectually, the Turing test has no theoretical limits. The interrogator may argue, question, observe and interact in every possible way with the correspondents. The imitation game might last an unlimited amount of time, and the interrogator may even demand that the correspondents demonstrate opinion, judgment, reflection, deliberation, wonder, calculation and even emotion. Granted, interacting with a live person is a different situation, but the limitations that the Turing test imposes on the interaction between correspondent and interrogator are exaggerated by Gunderson. Turing designed his question-and-answer method in a versatile way: his test "seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include."11 Either Gunderson overlooked this remark or he failed to substantiate his claim properly. He gives no examples, apart from physical demonstrations, that cannot be incorporated within the Turing test.
Gunderson tries to clarify his criticisms of Turing through an example involving a vacuum-cleaner salesperson and a homemaker. The scenario involves the salesperson trying to persuade the homemaker to purchase the 'all-purpose' Swiss 600 vacuum cleaner. A misunderstanding occurs when the homemaker demands to see the features of this vacuum cleaner: the salesperson can only demonstrate one capability (picking up dust) and nothing else. The homemaker is disappointed, since she assumed that by 'all-purpose' the salesperson meant the vacuum cleaner could do a multitude of things, not just one. How can the salesperson call the Swiss 600 'all-purpose' yet give only one example of its 'all-purpose' capabilities? The analogy Gunderson tries to propound is that 'all-purpose' and 'thinking' are terms that share certain features.12 Thinking, for Gunderson and most other people, is not an example of just one thing. I, for instance, would not be convinced that something was thinking merely because it was able to play cards. Furthermore, I still would not be convinced if that something was able to utter sounds in a coherent way. Gunderson suggests that thinking has to be viewed in a holistic context, not judged by one activity alone. It is not enough to cite one example and then immediately decide whether a thing 'thinks', or for that matter is 'all-purpose'. The crucial criticism of Turing comes when Gunderson suggests that the Turing test is just one example of thought, parallel to the sucking up of dust being just one example of what the 'all-purpose' Swiss 600 can do. The 'Imitation Game', being one example of a 'thinking' activity, fails to convince Gunderson that a machine can think precisely because it is only one illustration of 'thinking'. One example cannot represent or justify a word such as 'all-purpose' or 'thinking'. Therefore, both the homemaker and Gunderson are unconvinced of the Swiss 600's and the computer's capabilities. Trusting one example (the Turing test) is not enough to convince Gunderson that machines can be capable of thought: "It is because thinking cannot be identified with what can be shown by any one example or type of example; thus Turing's approach to the question 'Can a machine think?' via the imitation game is less than convincing... Turing, like the vacuum salesman, has trouble making his sale."13
Yet Gunderson does not clearly state how many examples would be needed to overcome his apprehension about admitting that a machine can think. He does not provide criteria for deciding whether a machine thinks. Does the object have to be composed of cells, like a human being, or does it merely need to exhibit all the same behavior as one? Gunderson claims that 'thinking' is inherent in our conception of a human, but he does not state what is inherent in our labeling something as 'thinking'.14 We may question Gunderson's analogy by asking, "Is the Turing test solely one example of 'thinking'?" I believe Turing would argue that his test is not a strictly defined example such as swimming or playing cards, or any activity with particular, stringent boundaries. Gunderson fails to consider the 'Imitation Game' as a dynamic framework that cannot be confined within the strict boundaries of one example. Instead of being an example of 'thinking', the Turing test may be envisioned as a set or collection of examples of 'thinking'. Whether this set of examples is sufficient to justify thought remains to be resolved; Gunderson, if he accepted this view, would probably still deny that the test is enough. What is more important is that Gunderson hastily classifies the Turing test as a mere example of 'thinking' rather than a framework that, to a large degree, exemplifies 'thinking'.
I may be a thousand miles away from my friend, but I am still able to communicate with her in a fashion very similar to the way I would if she were right next to me. I am able to sense her emotions and judge her intellectual capabilities through a medium that does not involve physical or immediate contact. In a similar sense, the Turing test restricts interaction only in the way my correspondence with my distant friend is restricted. Whether this limitation is enough to prevent me from judging my friend, or a computer, to be thinking is the essential question. Gunderson takes the 'Imitation Game' as one example among a multitude of examples that would identify a human as thinking. His notion of the test is so restrictive that it would prevent the interrogator from judging the correspondents' 'thinking' capabilities. I would contend that such a conception of the Turing test is inaccurate. The test, instead of being an example, is a system that helps us determine whether the object being tested exhibits thought. Therefore, many different examples of human actions that exhibit thought can be incorporated within the test, not simply one. The restrictions of the test are very similar to the restrictions humans face when interacting with each other.
The two views of the Turing test, Gunderson's one example of 'thinking' and Turing's comprehensive method for determining whether a machine can exhibit humanlike behavior, stem from different perceptions of thought. Turing sought to isolate the factors he considered sufficient to exhibit thought and incorporated them in his test. Gunderson's view of thought is that it is inseparable from the other activities that humans perform; for Gunderson, it is these activities taken together that constitute 'thinking'. Even if the machine in Turing's test provided more than just one example of 'thinking', maybe even several examples, Gunderson believes it would still fail to satisfy our intuitive concept of 'thinking'.15 Gunderson poses the question of how many examples would satisfy our criteria for thought, but he does not answer it. Nonetheless, without defining the range of examples that would satisfy this question, Gunderson claims that Turing's machine is not within these bounds. I find this position incomplete and contradictory, since a limit has to be established before one can judge an object to be within or beyond that limit. Gunderson is judging the Turing test without providing a definite criterion for an object to be 'thinking'.
Gunderson's last critical remark has not so much to do with the 'Imitation Game' itself as with the replication of human behavior. Primarily, Gunderson claims that much of what humans do requires a limited amount of thought. For example, a person adding and subtracting numbers can think about something else while engaged in this activity. So if a machine can replicate behavior such as adding and subtracting, it does not necessarily mean that it thinks. Tasks such as inventing original solutions or comprehending a situation from different perspectives and levels are much better criteria for thought. Still, we do not have a definite qualitative scale for thought, so we cannot be completely certain which tasks are indicative of human thought. At what level of complexity is an action derived from a thought? Gunderson does not attempt to answer this question, but neither does Turing.
There exist, then, two general views of the Turing test. There is Gunderson's very exclusive view of the test's capabilities: the test is a mere example of 'thinking' and has to be substantiated with other examples to illustrate the wide spectrum of what we consider to be 'thinking'. On the other hand, there is my assessment, and I believe Turing's assessment also, of the test as a more inclusive tool that can be adapted to include every constitutive element of thought. The parameters of the test are defined thus: "The object of the game for the interrogator is to determine which of the other two is the man and which is machine".16 I believe that these parameters are essential and sufficient for judging an object as having thought. Our only means of judging whether something is 'thinking' is by judging whether it imitates us, 'thinking' beings, in its intellectual behavior.
A more general question arises: how am I able to determine whether other people are thinking? A large part of attributing thought to an object is done automatically and without a series of tests. When I meet another person I do not ask myself, "Is that person thinking?" My mind is preconditioned so that when I meet another human I expect the same behavior that I exhibit. Thus, it is enough for me merely to glance at another human being to assume immediately that this human is thinking; I do not interact with them, observe them, and only then, according to my judgment, deem them thinking or not. By being repeatedly exposed to humans that think, I automatically induce that all humans think. Machines, especially computers, do not have the luxury of this presumption on the part of humans. An even more general question arises of whether behavior is enough to prove 'thought'. Is imitation, even if that imitation were perfect in every respect, enough to prove thought? Although Gunderson's concern is whether the Turing test is able to incorporate fully all the essential behavior of thought, we might not consider even this enough to guarantee thought.
Turing's view on the matter is this: "A reason for believing in the possibility of making thinking machinery is that it is possible to make machinery to imitate any small part of a man. The microphone does this for the ear, and the camera for the eye. One way of setting about our task of building a thinking machine would be to take a person as a whole and to try to replace the parts of him by machinery."17 Turing, in this statement, takes a materialist view of the situation. If an android were constructed identical, part for part, to a human, then it would have to exhibit thought. This is in contrast to a dualist approach, which would not grant that physically identical objects must both have minds. It is possible that thoughts, derivatives of the mind, can only be granted by a supreme being or by an act that is beyond the material. So even if a computer were identical in all its functions to a human, this would not be sufficient to satisfy the dualist that the machine is 'thinking'.
Even if we accept a materialist view, we might question whether two objects with identical behavior, but with different mechanisms for attaining that behavior, can both be thought of as thinking. If we built an android that was indistinguishable from a human in terms of its behavior, yet contained not a brain but a different mechanism for realizing this behavior, would we deem it 'thinking'? Gunderson addresses this discrepancy between 'behavior' and 'the method of materializing this behavior', and Turing's claim that identical behavior between a machine and a human justifies attributing thought to the machine is vulnerable to attack here. Gunderson takes it that this identity cannot simply be assumed and that it could be the case that only a certain procedure produces thought; similar behavior may not be the result of thought in all cases, as he tries to establish. It seems that Turing is taking a neo-physicalist account of thought: the phenomena of thought can only be described in spatiotemporal terms, and consequently any notion of thought can in principle be reduced to an empirically verifiable physical statement. If such statements are identical, though not necessarily manifested in the same fashion, then thought occurs. As long as behavior is synonymous with a certain reality, namely thought, the mechanism or technique required to produce this behavior (or thought) can vary.
The parameters that the Turing test provides are limited only by the limits of human interaction.18 If the limits of human interaction are sufficient to determine whether an object has thought, then I believe that the Turing test is a fair method of determining whether machines think. However, if our knowledge of what constitutes 'thinking' is incomplete, or cannot be obtained merely through human interaction with a machine or a human, then the Turing test comes up short in its assessment of a machine's capability of thought. The contentious issue for Gunderson is whether the parameters of the Turing test allow a demonstration sufficient to impute thought; he believes they do not.
I, on the other hand, take a more liberal view of the Turing test. The parameters that Turing defined in the 'Imitation Game' allow for games, the learning of languages, the translation of languages, cryptography, mathematics, deduction, demonstrations of original thought, emotion and all the facets of what we term 'thinking'. The interrogator faces no boundaries in the questions he or she might ask. Computers have had very limited success in this test; under rigorous examination, few have fared well for even a short time. I believe this shows either that the Turing test is much more sophisticated than Gunderson claims or that artificial intelligence is in a very primitive state. Only the development and further testing of computers through the Turing test can provide answers as to whether a machine can be termed 'thinking'. It is also important to recall Turing's original purpose in creating the 'Imitation Game'. It was not an answer to his original question, "Can machines think?". He assessed that this question was beyond his capabilities as a scientist to answer, since it required an ontological definition of 'thought'. The goal of the Turing test was to measure how well a machine compared to a human in terms of its behavior. This is a different question from the original, albeit very similar to it. Turing's later claim, that if machines were indistinguishable from humans, not just by means of the 'Imitation Game', we would have to impute that they think, is much more contentious. Gunderson picks up on this ambiguity and challenges the Turing test as a valid way of imputing thought to a machine. Although I do not believe this was part of Turing's scheme, I have delineated Gunderson's objections to the Turing test and commented on them. Finally, I went beyond Gunderson's essay and further examined the relevance of thought to Turing's 'Imitation Game'.
1 Turing's essay was published in Mind: A Quarterly Review of Psychology and Philosophy, Vol. LIX, No. 236, October 1950.
2 The original game involved distinguishing a man from a woman, but the goal of determining whether a machine can imitate a human being is the same.
3 Hodges, Andrew. Alan Turing: The Enigma of Intelligence. Unwin, 1985, p. 415.
4 Gunderson's essay was published in Minds and Machines, edited by Alan Ross Anderson.
5 Anderson, Alan Ross (ed.). Minds and Machines, p. 62.
6 Ince, D.C. (ed.). Collected Works of A.M. Turing. North-Holland, 1992, p. 133.
7 Anderson, p. 63.
8 Ince, p. 136.
9 Anderson, p. 65.
10 Ibid., p. 66.
11 Ince, p. 135.
12 Anderson, p. 69.
13 Ibid., pp. 68-69.
14 Ibid., p. 68.
15 Anderson, p. 70.
16 Ince, p. 133.
17 Hodges, p. 116.
18 The Turing test provides unlimited mental interaction. As mentioned earlier, it does not allow for physical demonstrations. I do not believe that physical demonstrations are needed for thought to be demonstrated, and Gunderson does not cite any examples where they are necessary. If thought requires manifestation through a human body, rather than an android one, then the Turing test is inadequate. I have taken note of this objection, but it is not substantiated by Gunderson in his article.